Search results for "Estimation of distribution algorithm"
Showing 9 of 9 documents
Applications of Evolutionary Computation
2011
EvoCOMPLEX Contributions.- Coevolutionary Dynamics of Interacting Species.- Evolving Individual Behavior in a Multi-agent Traffic Simulator.- On Modeling and Evolutionary Optimization of Nonlinearly Coupled Pedestrian Interactions.- Revising the Trade-off between the Number of Agents and Agent Intelligence.- Sexual Recombination in Self-Organizing Interaction Networks.- Symbiogenesis as a Mechanism for Building Complex Adaptive Systems: A Review.- EvoGAMES Contributions.- Co-evolution of Optimal Agents for the Alternating Offers Bargaining Game.- Fuzzy Nash-Pareto Equilibrium: Concepts and Evolutionary Detection.- An Evolutionary Approach for Solving the Rubik's Cube Incorporating Exact Met…
Session details: Track 5: estimation of distribution algorithms
2009
Denoising Autoencoders for Fast Combinatorial Black Box Optimization
2015
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Autoencoders (AE) are generative stochastic networks with these desired properties. We integrate a special type of AE, the Denoising Autoencoder (DAE), into an EDA and evaluate the performance of DAE-EDA on several combinatorial optimization problems with a single objective. We assess the number of fitness evaluations as well as the required CPU times. We compare the results to the performance of the Bayesian Optimization Algorithm (BOA) and of RBM-EDA, another EDA based on a generative neural network that has proven competitive with BOA. For the considered pro…
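The EDA scheme this abstract builds on (select promising solutions, fit a probability model, sample offspring from it) can be sketched as follows. To keep the sketch short, the DAE model is replaced by a simple univariate marginal model (UMDA-style); all function names and parameter values are illustrative, not taken from the paper.

```python
import random

def onemax(x):
    """Toy objective: number of one-bits."""
    return sum(x)

def eda(n=20, pop_size=100, parents=50, generations=30, seed=1):
    """Minimal EDA loop. DAE-EDA would replace the 'fit model' and
    'sample' steps with training and sampling a denoising autoencoder."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=onemax, reverse=True)
        selected = pop[:parents]
        # fit model: per-bit marginal frequencies of the selected parents
        p = [sum(ind[i] for ind in selected) / parents for i in range(n)]
        # sample a fresh population from the model
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
    return max(pop, key=onemax)

best = eda()
```

Any generative model that supports the fit/sample interface (DAE, RBM, Bayesian network) can be plugged into the same loop.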
Scalability of using Restricted Boltzmann Machines for Combinatorial Optimization
2014
Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and complexity. The results are compared to the Bayesian Optimization Algorithm (BOA), a state-of-the-art multivariate EDA, and the Dependency Tree Algorithm (DTA), which uses a simpler probability model requiring less computati…
Decomposition of Dynamic Single-Product and Multi-product Lotsizing Problems and Scalability of EDAs
2008
In existing theoretical and experimental work, Estimation of Distribution Algorithms (EDAs) are primarily applied to decomposable test problems. State-of-the-art EDAs like the Hierarchical Bayesian Optimization Algorithm (hBOA), the Learning Factorized Distribution Algorithm (LFDA) or the Estimation of Bayesian Networks Algorithm (EBNA) solve these problems in polynomial time. Given this success, it is tempting to apply EDAs to real-world problems. But up to now, it has rarely been analyzed which real-world problems are decomposable. The main contribution of this chapter is twofold: (1) It shows that uncapacitated single-product and multi-product lotsizing problems are decomposable. (2) A s…
Benchmarking parameter-free AMaLGaM on functions with and without noise.
2013
We describe a parameter-free estimation-of-distribution algorithm (EDA) called the adapted maximum-likelihood Gaussian model iterated density-estimation evolutionary algorithm (AMaLGaM-IDEA, or AMaLGaM for short) for numerical optimization. AMaLGaM is benchmarked within the 2009 black box optimization benchmarking (BBOB) framework and compared to a variant with incremental model building (iAMaLGaM). We study the implications of factorizing the covariance matrix in the Gaussian distribution, to use only a few or no covariances. Further, AMaLGaM and iAMaLGaM are also evaluated on the noisy BBOB problems and we assess how well multiple evaluations per solution can average ou…
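The maximum-likelihood Gaussian core that AMaLGaM builds on can be sketched as a plain Gaussian EDA: fit a full-covariance Gaussian to the selected solutions and resample. This is only the base scheme; AMaLGaM itself adds adaptive variance scaling and anticipated mean shifts, which are not shown. The objective, sizes, and seed below are illustrative.

```python
import numpy as np

def sphere(x):
    """Toy continuous objective: squared distance to the origin."""
    return float(np.sum(x ** 2))

def gaussian_eda(dim=2, pop_size=80, parents=25, generations=60, seed=0):
    """Plain maximum-likelihood Gaussian EDA (no AMaLGaM adaptations)."""
    rng = np.random.default_rng(seed)
    mean = rng.uniform(-3, 3, dim)
    cov = 2.0 * np.eye(dim)
    for _ in range(generations):
        pop = rng.multivariate_normal(mean, cov, size=pop_size)
        order = np.argsort([sphere(x) for x in pop])
        sel = pop[order[:parents]]           # truncation selection
        mean = sel.mean(axis=0)              # ML estimate of the mean
        # ML covariance, lightly regularized to stay positive definite
        cov = np.cov(sel, rowvar=False) + 1e-8 * np.eye(dim)
    return mean

m = gaussian_eda()
```

On a unimodal function like this, the pure ML update already converges; the paper's adaptations address cases where the sample variance shrinks faster than the mean can travel.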
An implicitly parallel EDA based on restricted Boltzmann machines
2014
We present a parallel version of RBM-EDA. RBM-EDA is an Estimation of Distribution Algorithm (EDA) that models dependencies between decision variables using a Restricted Boltzmann Machine (RBM). In contrast to other EDAs, RBM-EDA mainly uses matrix-matrix multiplications for model estimation and sampling. Hence, for implementation, standard libraries for linear algebra can be used. This allows an easy parallelization and leads to a high utilization of parallel architectures. The probabilistic model of the parallel version and the version on a single core are identical. We explore the speedups gained from running RBM-EDA on a Graphics Processing Unit. For problems of bounded difficulty like …
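The claim that RBM-EDA's model estimation and sampling reduce to matrix-matrix multiplications can be illustrated with one block-Gibbs sampling step over a whole batch of candidate solutions. Sizes and initialization below are hypothetical toy values, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy RBM: 16 visible units (decision variables), 8 hidden units.
n_vis, n_hid, batch = 16, 8, 32
W = rng.normal(0.0, 0.1, (n_vis, n_hid))  # weight matrix
b_v = np.zeros(n_vis)                     # visible biases
b_h = np.zeros(n_hid)                     # hidden biases

def gibbs_step(V):
    """One block-Gibbs step for a batch of solutions. Each direction is a
    single matrix-matrix multiplication, which is why the computation maps
    well onto BLAS libraries and GPUs."""
    p_h = sigmoid(V @ W + b_h)                         # (batch, n_hid)
    H = (rng.random(p_h.shape) < p_h).astype(float)    # sample hidden
    p_v = sigmoid(H @ W.T + b_v)                       # (batch, n_vis)
    return (rng.random(p_v.shape) < p_v).astype(float) # sample visible

V0 = (rng.random((batch, n_vis)) < 0.5).astype(float)
V1 = gibbs_step(V0)
```

Because every candidate in the batch is processed by the same two multiplications, the parallelism is implicit in the linear algebra rather than in any explicit thread management.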
DAE-GP
2020
Estimation of distribution genetic programming (EDA-GP) algorithms are metaheuristics where sampling new solutions from a learned probabilistic model replaces the standard mutation and recombination operators of genetic programming (GP). This paper presents DAE-GP, a new EDA-GP which uses denoising autoencoder long short-term memory networks (DAE-LSTMs) as its probabilistic model. DAE-LSTMs are artificial neural networks that first learn the properties of a parent population by mapping promising candidate solutions to a latent space and reconstructing the candidate solutions from the latent space. The trained model is then used to sample new offspring solutions. We show on a generalization of t…
An analysis of the bias of variation operators of estimation of distribution programming
2018
Estimation of distribution programming (EDP) replaces standard GP variation operators with sampling from a learned probability model. To ensure a minimum amount of variation in a population, EDP adds random noise to the probabilities of random variables. This paper studies the bias of EDP's variation operator by performing random walks. The results indicate that the complexity of the EDP model is high since the model is overfitting the parent solutions when no additional noise is being used. Adding only a low amount of noise leads to a strong bias towards small trees. The bias gets stronger with an increased amount of noise. Our findings do not support the hypothesis that sampling drift is …
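The noise mechanism the abstract refers to (perturbing learned probabilities to guarantee a minimum amount of variation) can be illustrated by mixing a learned probability vector with uniform noise. This is only a generic smoothing sketch; EDP's exact noise formula and the parameter `eps` are assumptions.

```python
def smooth(probs, eps):
    """Mix a learned probability vector with the uniform distribution and
    renormalize, so no outcome has probability zero. Illustrative only:
    EDP's actual noise mechanism may differ."""
    k = len(probs)
    noisy = [(1.0 - eps) * p + eps / k for p in probs]
    total = sum(noisy)
    return [p / total for p in noisy]

# A model that has overfit its parents assigns probability 0 to one symbol;
# smoothing restores a floor of variation.
p = smooth([0.9, 0.1, 0.0], eps=0.1)
```

The trade-off the paper studies follows directly: with `eps = 0` the model reproduces the parents (overfitting), while larger `eps` flattens the distribution and biases sampling away from what was learned.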